
    Lower Bounds on the Redundancy of Huffman Codes with Known and Unknown Probabilities

    In this paper we provide a method to obtain tight lower bounds on the minimum redundancy achievable by a Huffman code when the probability distribution underlying an alphabet is only partially known. In particular, we address the case where the occurrence probabilities are unknown for some of the symbols in an alphabet. Bounds can be obtained for alphabets of a given size, for alphabets of up to a given size, and for alphabets of arbitrary size. The method operates on a Computer Algebra System, yielding closed-form numbers for all results. Finally, we show the potential of the proposed method to shed some light on the structure of the minimum redundancy achievable by the Huffman code.
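
    As a concrete reminder of the quantity being bounded, the sketch below computes the redundancy of a Huffman code for a fully known distribution (expected codeword length minus entropy); it illustrates the definition only, not the paper's bounding method for partially known probabilities.

```python
import heapq
import math

def huffman_code_lengths(probs):
    """Return the Huffman codeword length for each symbol probability."""
    # Each heap entry: (probability, tie-breaker, symbol indices in the subtree)
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    if len(heap) == 1:          # degenerate single-symbol alphabet
        return [1]
    counter = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:       # every symbol in the merged subtree gets one bit deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, s1 + s2))
        counter += 1
    return lengths

def huffman_redundancy(probs):
    """Redundancy = expected Huffman codeword length - source entropy (bits/symbol)."""
    lengths = huffman_code_lengths(probs)
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_len - entropy

# Example: a 4-symbol alphabet with one dominant symbol
print(huffman_redundancy([0.7, 0.15, 0.1, 0.05]))
```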

    Cross-color channel perceptually adaptive quantization for HEVC

    HEVC includes a Coding Unit (CU) level luminance-based perceptual quantization technique known as AdaptiveQP. AdaptiveQP perceptually adjusts the Quantization Parameter (QP) at the CU level based on the spatial activity of raw input video data in a luma Coding Block (CB). In this paper, we propose a novel cross-color channel adaptive quantization scheme which perceptually adjusts the CU level QP according to the spatial activity of raw input video data in the constituent luma and chroma CBs; i.e., the combined spatial activity across all three color channels (the Y, Cb and Cr channels). Our technique is evaluated in HM 16 with 4:4:4, 4:2:2 and 4:2:0 YCbCr JCT-VC test sequences. Both subjective and objective visual quality evaluations are undertaken during which we compare our method with AdaptiveQP. Our technique achieves considerable coding efficiency improvements, with maximum BD-Rate reductions of 15.9% (Y), 13.1% (Cr) and 16.1% (Cb) in addition to a maximum decoding time reduction of 11.0%.
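
    The idea of driving the CU-level QP from combined luma/chroma activity can be sketched as follows; the variance-based activity measure, the log-ratio mapping, and the clipping range are illustrative assumptions, not the authors' formulas or the HM AdaptiveQP code.

```python
import numpy as np

def spatial_activity(block):
    """Variance-based spatial activity of one coding block (illustrative measure)."""
    return float(np.var(block.astype(np.float64)))

def cross_channel_qp(base_qp, y_cb, cb_cb, cr_cb, avg_activity, strength=1.0):
    """Adjust the CU-level QP from the combined Y/Cb/Cr activity.

    Blocks with low combined activity (smooth, visually sensitive areas) get a
    lower QP (finer quantization); busy blocks get a higher QP. The log-ratio
    mapping and the +/-6 clipping are assumptions made for this sketch.
    """
    act = (spatial_activity(y_cb) + spatial_activity(cb_cb) + spatial_activity(cr_cb)) / 3.0
    delta = strength * 6.0 * np.log2((act + 1.0) / (avg_activity + 1.0))
    delta = float(np.clip(delta, -6.0, 6.0))
    return int(np.clip(round(base_qp + delta), 0, 51))

# Example: one 16x16 CU with random content, frame-average activity of 120
rng = np.random.default_rng(0)
y, cb, cr = (rng.integers(0, 256, (16, 16)) for _ in range(3))
print(cross_channel_qp(base_qp=32, y_cb=y, cb_cb=cb, cr_cb=cr, avg_activity=120.0))
```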

    Accelerating BPC-PaCo through visually lossless techniques

    Fast image codecs are a current need in applications that deal with large numbers of images. Graphics Processing Units (GPUs) are suitable processors to speed up most kinds of algorithms, especially when they allow fine-grain parallelism. Bitplane Coding with Parallel Coefficient processing (BPC-PaCo) is a recently proposed algorithm for the core stage of wavelet-based image codecs tailored for the highly parallel architectures of GPUs. This algorithm provides complexity scalability to allow faster execution at the expense of coding efficiency. Its main drawback is that the speedup and the loss in image quality are controlled only roughly, resulting in visible distortion at low and medium rates. This paper addresses this issue by integrating techniques of visually lossless coding into BPC-PaCo. The resulting method minimizes the visual distortion introduced in the compressed file, yielding higher-quality images as perceived by a human observer. Experimental results also indicate 12% speedups with respect to BPC-PaCo.
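
    A rough sketch of the visually lossless principle involved: given a per-subband visibility threshold, one can bound how many least-significant bitplanes may be skipped without visible error. The thresholds and the error bound below are placeholders, not the paper's psychovisual model or its integration into BPC-PaCo.

```python
def skippable_bitplanes(visibility_threshold, quant_step=1.0):
    """Number of least-significant bitplanes whose omission stays below the
    visibility threshold, for coefficients quantized with the given step.

    Dropping the k lowest bitplanes introduces an error of at most
    (2**k - 1) * quant_step per coefficient, so we take the largest k for
    which that bound does not exceed the threshold. Real thresholds would
    come from a contrast-sensitivity model; the values below are made-up
    placeholders, not the ones used in the paper.
    """
    k = 0
    while (2 ** (k + 1) - 1) * quant_step <= visibility_threshold:
        k += 1
    return k

# Illustrative per-subband visibility thresholds (finer subbands -> larger thresholds)
thresholds = {"HH1": 12.0, "HL1": 8.0, "LH1": 8.0, "HH2": 4.0, "LL2": 1.0}
for subband, t in thresholds.items():
    print(subband, "-> skip", skippable_bitplanes(t), "bitplane(s)")
```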

    Standard and specific compression techniques for DNA microarray images

    We review the state of the art in DNA microarray image compression and provide original comparisons between standard and microarray-specific compression techniques that validate and expand previous work. First, we describe the most relevant approaches published in the literature and classify them according to the stage of the typical image compression process where each approach makes its contribution, and then we summarize the compression results reported for these microarray-specific image compression schemes. In a set of experiments conducted for this paper, we obtain new results for several popular image coding techniques that include the most recent coding standards. Prediction-based schemes CALIC and JPEG-LS are the best-performing standard compressors, but are improved upon by the best microarray-specific technique, Battiato's CNN-based scheme.
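
    Since prediction-based standards such as JPEG-LS perform best among the general-purpose compressors compared here, the following minimal sketch restates the JPEG-LS median edge detector (MED) predictor; it is textbook material, not the microarray-specific CNN scheme.

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: predict x from its left (a), above (b)
    and above-left (c) neighbours."""
    if c >= max(a, b):
        return min(a, b)     # edge above or to the left of x
    if c <= min(a, b):
        return max(a, b)
    return a + b - c         # smooth region: planar prediction

def med_residuals(image):
    """Prediction residuals for a 2-D list of pixel rows (out-of-image neighbours are 0)."""
    h, w = len(image), len(image[0])
    get = lambda r, col: image[r][col] if 0 <= r < h and 0 <= col < w else 0
    return [[image[r][col] - med_predict(get(r, col - 1), get(r - 1, col), get(r - 1, col - 1))
             for col in range(w)] for r in range(h)]

# Example: residuals of a small gradient image are mostly small
img = [[10, 12, 14, 16], [11, 13, 15, 17], [12, 14, 16, 18]]
print(med_residuals(img))
```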

    High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation

    The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the number of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression-complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
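
    A minimal illustration of what spectral decorrelation buys: predicting each band from the previous one and coding only the residual. The per-band linear fit below is a generic stand-in, not FAPEC or the CCSDS 123.0-B-2 predictor.

```python
import numpy as np

def spectral_decorrelate(cube):
    """Replace each band (axis 0) with its residual against a linear prediction
    from the previous band. A generic illustration of spectral decorrelation;
    real compressors use more elaborate adaptive predictors."""
    cube = cube.astype(np.float64)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]                       # first band kept as-is
    for z in range(1, cube.shape[0]):
        prev, cur = cube[z - 1].ravel(), cube[z].ravel()
        # Least-squares gain/offset mapping the previous band onto the current one
        gain, offset = np.polyfit(prev, cur, 1)
        residuals[z] = cube[z] - (gain * cube[z - 1] + offset)
    return residuals

# Example: a tiny synthetic 4-band cube with strong inter-band correlation
rng = np.random.default_rng(1)
base = rng.normal(1000, 50, (32, 32))
cube = np.stack([base * (1 + 0.1 * z) + rng.normal(0, 2, (32, 32)) for z in range(4)])
res = spectral_decorrelate(cube)
print(cube.std(), res[1:].std())                 # residual bands have far lower energy
```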

    Lossless compression of color filter array mosaic images with visualization via JPEG 2000

    Digital cameras have become ubiquitous for amateur and professional applications. The raw images captured by digital sensors typically take the form of color filter array (CFA) mosaic images, which must be "developed" (via digital signal processing) before they can be viewed. Photographers and scientists often repeat the "development process" using different parameters to obtain images suitable for different purposes. Since the development process is generally not invertible, it is commonly desirable to store the raw (or undeveloped) mosaic images indefinitely. Uncompressed mosaic image file sizes can be more than 30 times larger than those of developed images stored in JPEG format. Thus, data compression is of interest. Several compression methods for mosaic images have been proposed in the literature. However, they all require a custom decompressor followed by development-specific software to generate a displayable image. In this paper, a novel compression pipeline that removes these requirements is proposed. Specifically, mosaic images can be losslessly recovered from the resulting compressed files, and, more significantly, images can be directly viewed (decompressed and developed) using only a JPEG 2000 compliant image viewer. Experiments reveal that the proposed pipeline attains excellent visual quality, while providing compression performance competitive with that of state-of-the-art compression algorithms for mosaic images.
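
    One common way to make a Bayer CFA mosaic friendly to a standard transform codec is to split it losslessly into its four subsampled color planes before coding; the sketch below shows only that packing step, under an assumed RGGB layout, not the paper's full JPEG 2000 pipeline with direct visualization.

```python
import numpy as np

def split_bayer_rggb(mosaic):
    """Split an RGGB Bayer mosaic into four half-resolution planes (R, G1, G2, B).

    Grouping same-color samples restores spatial smoothness inside each plane,
    which standard transform codecs such as JPEG 2000 exploit. The inverse is
    exact, so lossless coding of the planes is lossless for the mosaic.
    """
    r  = mosaic[0::2, 0::2]
    g1 = mosaic[0::2, 1::2]
    g2 = mosaic[1::2, 0::2]
    b  = mosaic[1::2, 1::2]
    return r, g1, g2, b

def merge_bayer_rggb(r, g1, g2, b):
    """Exact inverse of split_bayer_rggb."""
    h, w = r.shape
    mosaic = np.empty((2 * h, 2 * w), dtype=r.dtype)
    mosaic[0::2, 0::2] = r
    mosaic[0::2, 1::2] = g1
    mosaic[1::2, 0::2] = g2
    mosaic[1::2, 1::2] = b
    return mosaic

# Round-trip check on a random 16-bit mosaic
mosaic = np.random.default_rng(2).integers(0, 2**16, (8, 8), dtype=np.uint16)
assert np.array_equal(mosaic, merge_bayer_rggb(*split_bayer_rggb(mosaic)))
```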

    Performance impact of parameter tuning on the CCSDS-123.0-B-2 low-complexity lossless and near-lossless multispectral and hyperspectral image compression standard

    This article studies the performance impact related to different parameter choices for the new CCSDS-123.0-B-2 Low-Complexity Lossless and Near-Lossless Multispectral and Hyperspectral Image Compression standard. This standard supersedes CCSDS-123.0-B-1 and extends it by incorporating a new near-lossless compression capability, as well as other new features. This article studies the coding performance impact of different choices for the principal parameters of the new extensions, in addition to reviewing related parameter choices for existing features. Experimental results include data from 16 different instruments with varying detector types, image dimensions, number of spectral bands, bit depth, level of noise, level of calibration, and other image characteristics. Guidelines are provided on how to adjust the parameters in relation to their coding performance impact.
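
    The key near-lossless parameter studied is the maximum error bound; bounded-error quantization of prediction residuals behaves as in the sketch below, which is a generic restatement of the idea rather than the standard's normative procedure.

```python
def quantize_residual(delta, m):
    """Uniform quantization of a prediction residual with maximum error m.

    With step 2*m + 1, the reconstruction error never exceeds m, which is the
    near-lossless guarantee that the error-bound parameters control (m = 0
    reduces to lossless coding).
    """
    sign = 1 if delta >= 0 else -1
    return sign * ((abs(delta) + m) // (2 * m + 1))

def dequantize_residual(q, m):
    """Reconstruct the residual from its quantizer index."""
    return q * (2 * m + 1)

# The absolute error stays within the bound for every residual value
m = 3
for delta in range(-50, 51):
    err = abs(delta - dequantize_residual(quantize_residual(delta, m), m))
    assert err <= m
print("max error bound", m, "holds")
```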

    Reducing data dependencies in the feedback loop of the CCSDS 123.0-B-2 predictor

    Additional funding: European Space Agency (ESA) (Grant Number: 4000136723/22/NL/CRS). On-board multi- and hyperspectral instruments acquire large volumes of data that need to be processed with limited computational and storage resources. In this context, the CCSDS 123.0-B-2 standard emerges as an interesting option to compress multi- and hyperspectral images on board satellites, supporting both lossless and near-lossless compression with low complexity and reduced power consumption. Nonetheless, the inclusion of a feedback loop in the CCSDS 123.0-B-2 predictor to support near-lossless compression introduces significant data dependencies that hinder real-time processing, particularly due to the presence of a quantization stage within this loop. This work provides an analysis of the aforementioned data dependencies and proposes two strategies aiming at maximizing throughput in hardware implementations and thus enabling real-time processing. In particular, through an elaborate mathematical derivation, the quantization stage is removed completely from the feedback loop. This reduces the critical path, which allows for shorter initiation intervals in a pipelined hardware implementation and higher throughput. This is achieved without any impact on compression performance, which remains identical to that obtained with the original data flow of the predictor.
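
    To make the dependency concrete, the sketch below shows, in scalar form with a deliberately simplified predictor, why a quantizer inside the prediction loop serializes processing: each reconstructed sample is needed before the next one can be predicted. It illustrates the problem the paper removes, not its mathematical derivation.

```python
def nearlossless_feedback_loop(samples, m):
    """Scalar near-lossless predictive coding with the quantizer inside the loop.

    The prediction here is simply the previous *reconstructed* sample, so
    sample t cannot be processed until sample t-1 has been predicted,
    quantized and reconstructed -- the serial dependency that limits
    pipelined hardware throughput. Simplified illustration only.
    """
    step = 2 * m + 1
    prev_rec = 0                      # previous reconstructed sample
    indices, reconstructed = [], []
    for s in samples:
        residual = s - prev_rec       # prediction from reconstructed data
        sign = 1 if residual >= 0 else -1
        q = sign * ((abs(residual) + m) // step)
        rec = prev_rec + q * step     # reconstruction closes the feedback loop
        indices.append(q)
        reconstructed.append(rec)
        prev_rec = rec
    return indices, reconstructed

samples = [100, 104, 103, 98, 120, 119]
idx, rec = nearlossless_feedback_loop(samples, m=2)
print(idx, rec, max(abs(a - b) for a, b in zip(samples, rec)))  # error stays <= m
```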

    Implementing the New CCSDS Housekeeping Data Compression Standard 124.0-B-1 (Based on POCKET+) on OPS-SAT-1

    The number of telemetry parameters available in a typical spacecraft is constantly increasing. At the same time, the bandwidth available to download all that information is rather static. Operators must therefore make hard choices about which parameters to downlink, in which situations, and at which sampling rates. This tradeoff is more problematic for missions with higher communication latency beyond LEO. Since 2009, the European Space Agency's European Space Operations Center (ESA/ESOC) has been promoting the compression of housekeeping telemetry as a solution to this problem. Most spacecraft housekeeping telemetry parameters compress extremely well if they are pre-processed correctly. Unfortunately, most spacecraft record telemetry packets in flat packet stores, so accessing different packets within them is too CPU and memory intensive for flight computers. Traditional compression schemes such as zip or tar are not compatible with the usual "fire and forget" mode of operation, i.e., occasional packet losses are expected and would render entire compressed files unusable. ESOC invented an algorithm called POCKET+ to solve this problem. It is implemented using very low-level processor instructions such as OR, XOR, AND, etc. This means that it can run with low CPU usage and, more importantly, with a short execution time. It is designed to run fast enough to compress a stream of incoming packets as they are generated by the on-board packetiser. The output is a smaller stream of packets. The compressed packets can be handled by the on-board system in an identical fashion to the original larger uncompressed packets. Robustness with respect to the occasional packet loss is built into the protocol and does not require a back channel. In 2018, POCKET+ was proposed to the CCSDS data compression working group and, after extensive research by other agencies, the core idea has been incorporated into a proposed new standard for "Robust Compression of Fixed Length Housekeeping Data." The second supporter of the mission is CNES, supported technically by the Universitat Autònoma de Barcelona (UAB). Both CNES and UAB have suggested changes that make POCKET+ even more powerful. POCKET+ is already flying on OPS-SAT, a 3U CubeSat launched by the European Space Agency on December 18th, 2019. The mission has updated the Onboard Software (OBSW) and ground control software to be compliant with the latest POCKET+ standard. The standard is set to be available for an ESA review. This paper describes the latest algorithm and how it is implemented on OPS-SAT, including how the same core software has been successfully deployed in two completely different scenarios/environments. One compresses files offline and then uses a transport protocol with a completeness guarantee; the other compresses a packet stream in real time and uses the classic transport protocol where completeness is not guaranteed. The results show that compression ratios between eight and ten are usual for the OPS-SAT mission. Improvements made during the development of the planned CCSDS standard for "Robust Compression of Fixed Length Housekeeping Data" are also presented.
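
    The flavor of the pre-processing that makes housekeeping telemetry compressible can be illustrated with a generic XOR delta between consecutive fixed-length packets, where only changed bytes carry information; this is an assumption-level sketch, not the POCKET+/CCSDS 124.0-B-1 packet format.

```python
def xor_delta(prev_packet: bytes, packet: bytes):
    """XOR a fixed-length packet against its predecessor and report changes.

    Housekeeping packets repeat most of their fields from one sample to the
    next, so the XOR output is overwhelmingly zero bytes. Returns a change
    mask (one bit per byte) plus the non-zero bytes; a generic illustration,
    not the on-air POCKET+ encoding.
    """
    assert len(prev_packet) == len(packet)
    delta = bytes(a ^ b for a, b in zip(prev_packet, packet))
    mask = bytearray((len(delta) + 7) // 8)
    changed = bytearray()
    for i, byte in enumerate(delta):
        if byte:
            mask[i // 8] |= 1 << (i % 8)
            changed.append(byte)
    return bytes(mask), bytes(changed)

# Two consecutive 16-byte packets that differ in just two counters
p1 = bytes([0x10, 0x2A, 0x00, 0x01] + [0x00] * 12)
p2 = bytes([0x10, 0x2B, 0x00, 0x01] + [0x00] * 11 + [0x07])
mask, changed = xor_delta(p1, p2)
print(mask.hex(), changed.hex())   # only 2 of 16 bytes need to be sent
```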

    Analysis-driven lossy compression of DNA microarray images

    DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
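
    The notion of bounding the maximum relative error can be sketched with geometrically growing quantization intervals, so that wider intervals are used for brighter pixels; the construction below follows the general idea stated above, not the RQ's actual interval design or parameters.

```python
import math

def relative_quantize(x, max_rel_err):
    """Map a positive intensity to a quantization index so that dequantization
    stays within the given maximum relative error.

    Interval boundaries grow geometrically with ratio (1 + e) / (1 - e), so
    the reconstruction of each interval deviates from any value in it by at
    most the relative error e. Zero is kept exact. Illustrative construction
    only, not the paper's Relative Quantizer.
    """
    if x == 0:
        return 0
    e = max_rel_err
    ratio = (1 + e) / (1 - e)
    return 1 + int(math.log(x, ratio))

def relative_dequantize(index, max_rel_err):
    """Reconstruct the representative value of a geometric quantization interval."""
    if index == 0:
        return 0.0
    e = max_rel_err
    ratio = (1 + e) / (1 - e)
    lo = ratio ** (index - 1)
    return lo * (1 + e)          # midpoint in the relative-error sense

# Probe the worst relative error across a wide intensity range
e = 0.1
worst = max(abs(relative_dequantize(relative_quantize(v, e), e) - v) / v
            for v in range(1, 65536))
print(f"worst relative error: {worst:.3f} (bound {e})")
```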